KubeArchive: install on production #6407
Conversation
rh-hemartin commented on May 19, 2025
- Added production clusters.
- Patches are duplicated on purpose so that the clusters stay independent of each other.
- The remote-argocd account will need permissions to create clusterkubearchive.kubearchive.org/v1.
- The kyverno user will need permissions to create kubearchiveconfigs.kubearchive.org/v1 (see the RBAC sketch after this list).
- Before this gets merged I need someone to initialize the databases. Let me know and I can help you; instructions are:
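A minimal sketch of the RBAC grant the kyverno bullet describes, assuming a ClusterRole bound to the kyverno service account. Only the resource group and verb come from the bullet above; the role/binding names and the service account details are assumptions:

```yaml
# Hedged sketch: grants "create" on kubearchiveconfigs.kubearchive.org.
# Role/binding names and the kyverno ServiceAccount are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno-kubearchiveconfig-creator   # hypothetical name
rules:
  - apiGroups: ["kubearchive.org"]
    resources: ["kubearchiveconfigs"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kyverno-kubearchiveconfig-creator   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kyverno-kubearchiveconfig-creator
subjects:
  - kind: ServiceAccount
    name: kyverno-background-controller     # assumption
    namespace: kyverno                      # assumption
```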
@hugares @manish-jangra do you know the correct path for the database secrets? I just assumed the same path as the staging secret.
/hold
Could you also add the configuration needed under
NOTE: There is no need to add the Route, as the service doesn't need to be exposed for that. That is something we need to deal with for the kubearchive CLI usage, but not for the API per se.
Force-pushed from 9b9fa2d to d6966f9 (compare)
Done!
/lgtm
/retest
Ping
/lgtm
- nameNormalized: stone-prd-m01
  values.clusterDir: stone-prd-m01
m01 is deprecated and will be removed soon. @hugares, should we skip it in this rollout already?
I removed it, I can add it back if needed.
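For context, the two fields in the hunk above are one element of an ArgoCD ApplicationSet list generator. A minimal sketch of how such an element might sit in the generator; only the element fields come from the diff, the surrounding merge/clusters structure is an assumption about the repo layout:

```yaml
# Hedged sketch: only the list element fields come from the diff above;
# the merge/clusters generator wrapper is an assumption about the repo layout.
generators:
  - merge:
      mergeKeys:
        - nameNormalized
      generators:
        - clusters: {}                           # assumption: base cluster list
        - list:
            elements:
              - nameNormalized: stone-prd-m01    # entry removed: m01 is deprecated
                values.clusterDir: stone-prd-m01
```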
  cpu: 200m
  memory: 1Gi
requests:
  cpu: 200m
  memory: 900Mi
The development overlay is used in e2e tests. Do you think we can reduce the deployments' resources in this overlay so we'll require smaller clusters during e2e tests?
I reduced the memory for all deployments. We are currently sitting at ~30Mi on staging because we introduced a fix that solved the high memory usage, so I set it to 128Mi. I can lower it further for development if you want.
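A minimal sketch of what that development-overlay reduction could look like as a strategic-merge patch on a deployment. The 128Mi figure comes from the comment above; the deployment and container names, and the CPU values, are assumptions for illustration:

```yaml
# Hedged sketch of a development-overlay patch; deployment/container names
# and CPU values are assumptions, 128Mi comes from the comment above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubearchive-api-server        # hypothetical target
spec:
  template:
    spec:
      containers:
        - name: kubearchive-api-server   # hypothetical name
          resources:
            requests:
              cpu: 10m
              memory: 128Mi
            limits:
              cpu: 100m
              memory: 128Mi
```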
limits:
  cpu: 500m
requests:
  cpu: 10m
  memory: 256Mi
To prevent eviction of this service in production, we'd like to ensure the deployment has Guaranteed QoS (i.e. limits == requests). This suggestion also applies to the other KubeArchive deployments.
Done for all deployments. However, does it apply to just memory, or to CPU as well?
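For reference, Kubernetes assigns Guaranteed QoS only when every container in the pod has CPU and memory requests equal to its limits. A minimal sketch of such a resources block; the values are placeholders, not taken from this PR:

```yaml
# Guaranteed QoS requires requests == limits for both CPU and memory
# in every container of the pod. Values below are placeholders.
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi
```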
Signed-off-by: Hector Martinez <[email protected]>
Force-pushed from d6966f9 to 1ccc9f0 (compare)
/lgtm
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mafh314, maruiz93, ralphbean, rh-hemartin, skoved

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing